We thank the reviewers for their comments and suggestions, which will help us better present our work. We will include the comparisons in the camera-ready version, if accepted. We agree that Charades is a good dataset for evaluation; we plan to run experiments on Charades and present them in future work. More detailed analysis and discussion: we thank the reviewer for this suggestion and will include computation times in the final version.
Is Model Editing Built on Sand? Revealing Its Illusory Success and Fragile Foundation
Wei Liu, Haomei Xu, Bingqing Liu, Zhiying Deng, Haozhao Wang, Jun Wang, Ruixuan Li, Yee Whye Teh, Wee Sun Lee
Large language models (LLMs) inevitably encode outdated or incorrect knowledge. Updating, deleting, and forgetting such knowledge is important for alignment, safety, and related concerns. To address this issue, model editing has emerged as a promising paradigm: it precisely edits a small subset of parameters so that a specific fact is updated while other knowledge is preserved. Despite the great success reported in previous papers, we find that the apparent reliability of editing rests on a fragile foundation and that the current literature is largely built on illusory success. The fundamental objective of steering the model's output toward a target with minimal modification encourages the exploitation of hidden shortcuts rather than genuine semantics. This problem challenges the feasibility of the current model editing literature at its very foundation, as shortcuts are inherently at odds with robust knowledge integration. This issue has long been obscured by evaluation frameworks that lack negative examples. To uncover it, we systematically develop a suite of new evaluation methods. Strikingly, we find that state-of-the-art approaches collapse even under the simplest negation queries. Our empirical evidence shows that editing is likely based on shortcuts rather than full semantics, calling for an urgent reconsideration of the very basis of model editing before further advancements can be meaningfully pursued.
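To make the failure mode concrete, below is a minimal sketch of what such a negation probe could look like. All names, prompt templates, and the `query_model` helper are hypothetical placeholders, not the paper's actual evaluation suite: the idea is that an edit counts as robust only if the new target appears under the affirmative phrasing and does not leak into the negated one.

```python
# Hedged sketch of a negation-consistency probe for an edited model.
# `query_model` is a hypothetical stand-in for whatever short-form
# generation API the edited model exposes; it is not from the paper.

def query_model(prompt: str) -> str:
    """Hypothetical placeholder: return the model's short completion of `prompt`."""
    raise NotImplementedError("plug in the edited model's generation call")

def passes_negation_probe(subject: str, relation: str, new_target: str) -> bool:
    """Probe an edit with the simplest kind of negation query.

    A shortcut-based edit tends to fire on surface cues (e.g., the subject
    string), so it emits the new target even when the prompt is negated.
    """
    affirmative = f"{subject} is {relation}"   # e.g. "The Eiffel Tower is located in"
    negated = f"{subject} is not {relation}"   # naive negated phrasing of the same fact

    # Robust edit: the new target appears for the affirmative prompt only.
    answers_affirmative = new_target in query_model(affirmative)
    leaks_into_negation = new_target in query_model(negated)
    return answers_affirmative and not leaks_into_negation
```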
Dear Reviewers R1, R2, and R3: Thank you for your comments and suggestions to improve our paper. In contrast, BRN's statistical estimates are based on batches. After tuning its hyperparameters, we observed that it performs worse (Figure 2). ON removes the batch-size parameter and introduces two decay-rate parameters. We will include this figure in the paper's appendix. Note that this is not the best value observed in the sweep.
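For context on the normalization comparison, the following is a hedged sketch of the kind of per-sample normalization that two decay rates enable: running statistics are updated one sample at a time, so no batch-size parameter is needed. The class name and the parameters `alpha_mu` and `alpha_var` are illustrative assumptions, not the actual ON implementation.

```python
import numpy as np

class RunningNorm:
    """Sketch: per-sample normalization with exponentially decayed statistics.

    Two decay rates replace the batch-size parameter: one for the running
    mean, one for the running variance. Names here are hypothetical.
    """

    def __init__(self, dim: int, alpha_mu: float = 0.99,
                 alpha_var: float = 0.99, eps: float = 1e-5):
        self.mu = np.zeros(dim)    # running mean estimate
        self.var = np.ones(dim)    # running variance estimate
        self.alpha_mu = alpha_mu
        self.alpha_var = alpha_var
        self.eps = eps

    def __call__(self, x: np.ndarray) -> np.ndarray:
        # Normalize the incoming sample with the current running estimates;
        # no batch of samples is ever needed.
        y = (x - self.mu) / np.sqrt(self.var + self.eps)
        # Decay the mean toward the new sample, then decay the variance
        # toward the squared deviation from the updated mean.
        self.mu = self.alpha_mu * self.mu + (1 - self.alpha_mu) * x
        self.var = self.alpha_var * self.var + (1 - self.alpha_var) * (x - self.mu) ** 2
        return y
```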